

differential equation


1. Introduction

A differential equation is an equation whose solutions are functions and which incorporates derivatives of the function being solved for. Differential equations often have an infinite family of solutions: a general solution encompasses many particular solutions. A particular solution is a specific function, corresponding to a single choice of initial conditions. General solutions therefore tell you how to solve initial value problems.

Differential equations are often used to model real-world systems, and are the main tool in numerical simulations of such systems.
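For instance, here is a minimal sketch of that numerical use, assuming a hypothetical decay model dy/dx = -0.5y and a simple fixed-step Euler integrator (all names and constants here are my own illustrative choices):

```python
# Sketch: simulating a system described by a differential equation.
# Hypothetical model dy/dx = -0.5 * y (exponential decay).
import math

def euler(f, y0, x0, x1, steps):
    """Advance y' = f(x, y) from (x0, y0) to x1 with fixed-step Euler."""
    h = (x1 - x0) / steps
    x, y = x0, y0
    for _ in range(steps):
        y += h * f(x, y)
        x += h
    return y

# Exact solution of the decay model is y = y0 * exp(-0.5 * x).
approx = euler(lambda x, y: -0.5 * y, 1.0, 0.0, 2.0, 100_000)
exact = math.exp(-1.0)
assert abs(approx - exact) < 1e-4
```

With enough steps, the simulated value converges to the exact solution; fancier integrators (Runge-Kutta, etc.) trade steps for accuracy.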

1.1. ODE

An ODE (ordinary differential equation) is a differential equation whose unknown is a function of a single variable; it involves that function and its ordinary derivatives.

1.2. PDE

A PDE (partial differential equation) is a differential equation whose unknown is a function of several variables; it generally involves partial derivatives of the unknown function.

1.3. initial value problem

An initial value problem gives a differential equation together with particular values of the unknown function (and possibly of its derivatives) at some point; its solution is a particular solution.

1.4. separable differential equation

For ODEs, separable differential equations are differential equations of the form:

$$\frac{dy}{dx} = f(y)g(x)$$

which can be solved by separating the variables and integrating both sides:

$$\begin{aligned}
\frac{dy}{f(y)} &= g(x)\,dx \\
\int\frac{dy}{f(y)} &= \int g(x)\,dx
\end{aligned}$$

Evaluating the integrals and solving for y, you obtain solutions for y in terms of x.
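As a worked example (my own, not from the text): for dy/dx = xy, separating gives dy/y = x dx, so ln(y) = x²/2 + C and y = y₀ e^{x²/2}. The snippet below confirms this numerically with a crude Euler integration:

```python
# Numeric sanity check of the separable recipe for dy/dx = x * y.
# Closed form (from separation of variables): y = y0 * exp(x^2 / 2).
import math

def integrate_xy(y0, x_end, steps=100_000):
    """Integrate dy/dx = x*y with forward Euler as a sanity check."""
    h = x_end / steps
    x, y = 0.0, y0
    for _ in range(steps):
        y += h * x * y
        x += h
    return y

numeric = integrate_xy(1.0, 1.0)
closed_form = math.exp(0.5)   # y0 * e^{x^2/2} at x = 1, y0 = 1
assert abs(numeric - closed_form) < 1e-3
```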

1.5. linear differential equation

Linear differential equations are differential equations of the form:

$$\left[\sum_{i} f_{i}(x)\,D^{i}\right] y(x) = g(x)$$

where D is the derivative operator. They are linear because the unknown function y(x) is acted on by a linear operator, so common methods from linear algebra can be used to analyze equations of this kind. For example, a first-order linear differential equation looks like this:

$$[f(x)D + g(x)]\,y(x) = h(x)$$

which can be easily solved in the following way, where G(x) = g(x)/f(x) and H(x) = h(x)/f(x):

$$\begin{aligned}
[D + G(x)]\,y(x) &= H(x) \\
\mu(x)[D + G(x)]\,y(x) &= \mu(x)H(x) \\
\mu'(x) &:= G(x)\mu(x) \\
D(\mu(x)y(x)) &= \mu(x)H(x) \\
y(x) &= \frac{\int\mu(x)H(x)\,dx}{\mu(x)}
\end{aligned}$$

Now, to solve for μ(x), use separable differential equation methods:

$$\begin{aligned}
\frac{d\mu}{dx} &= G(x)\mu(x) \\
\frac{1}{\mu}\,d\mu &= G(x)\,dx \\
\int\frac{1}{\mu}\,d\mu &= \int G(x)\,dx \\
\ln(\mu) &= \int G(x)\,dx \\
\mu &= e^{\int G(x)\,dx}
\end{aligned}$$

Therefore:

$$y(x) = \frac{\int e^{\int G(x)\,dx}\,H(x)\,dx}{e^{\int G(x)\,dx}}$$

Then, to model any particular first-order system, plug in functions for G(x) and H(x).
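As a quick sanity check of this recipe (the coefficients here are my own hypothetical example): take G(x) = 1 and H(x) = x. Then μ = e^x, ∫xe^x dx = e^x(x - 1) + C, and the formula gives y = x - 1 + Ce^{-x}. The snippet below verifies numerically that this y satisfies y' + G(x)y = H(x):

```python
# Verify the integrating-factor solution for the hypothetical case
# G(x) = 1, H(x) = x, i.e. y' + y = x, solved by y = x - 1 + C e^{-x}.
import math

def y(x, C=2.0):
    return x - 1.0 + C * math.exp(-x)

def dydx(x, C=2.0, h=1e-6):
    # central difference approximation of y'
    return (y(x + h, C) - y(x - h, C)) / (2 * h)

# Check y' + G(x)*y == H(x) at a few sample points.
for x in [0.0, 0.5, 1.0, 3.0]:
    assert abs(dydx(x) + y(x) - x) < 1e-6
```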

1.5.1. superposition principle

The principle of superposition states that, for a homogeneous linear differential equation (g(x) = 0 above), any solutions f_i(x) add to a new solution:

$$\sum_{i=0}^{N} f_{i}(x) = f_{\mathrm{new}}(x)$$

that also satisfies the linear differential equation. This works because the operator is linear, so it distributes over the sum and each term still maps to zero.
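A numeric illustration (my own example): for the homogeneous equation y'' + y = 0, both sin(x) and cos(x) are solutions, so any combination a·sin(x) + b·cos(x) must be one too:

```python
# Superposition check for y'' + y = 0: a linear combination of the
# solutions sin(x) and cos(x) should also satisfy the equation.
import math

def second_derivative(f, x, h=1e-4):
    # central difference approximation of f''
    return (f(x + h) - 2 * f(x) + f(x - h)) / (h * h)

combo = lambda x: 3.0 * math.sin(x) - 2.0 * math.cos(x)

for x in [0.1, 1.0, 2.5]:
    residual = second_derivative(combo, x) + combo(x)
    assert abs(residual) < 1e-5
```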

1.5.2. Higher Order Linear Differential Equations

Solving higher-order linear differential equations requires a couple of tricks. For example, transforms such as the Laplace Transform or the Fourier Transform may be used. Such transforms reduce differential equation problems to algebraic problems, thus simplifying their solution. Other methods include guessing (I'm not pulling your leg here, this is real), formulation as an eigenvalue problem, and Taylor polynomial solutions. We will take a look at all of these in this section.

1.5.3. Homogeneous Case

Take the case Ay'' + By' + Cy = 0, then substitute the trial form y = De^{kt}. Then:

$$\begin{aligned}
De^{kt}(Ak^{2} + Bk + C) &= 0 \\
Ak^{2} + Bk + C &= 0
\end{aligned}$$

Then, use the quadratic formula to solve for k in terms of the other constants. Such a polynomial is called the characteristic polynomial of the differential equation.
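As a concrete (made-up) instance: for y'' - 3y' + 2y = 0, the characteristic polynomial is k² - 3k + 2, with roots k = 1 and k = 2, and each e^{kt} solves the equation. A short numeric check, with all coefficients my own:

```python
# Characteristic polynomial of y'' - 3y' + 2y = 0: k^2 - 3k + 2 = 0.
import math

A, B, C = 1.0, -3.0, 2.0
disc = math.sqrt(B * B - 4 * A * C)
k1, k2 = (-B + disc) / (2 * A), (-B - disc) / (2 * A)
assert {k1, k2} == {1.0, 2.0}

# Check that y = e^{k1 t} satisfies the ODE, via finite differences.
def y(t): return math.exp(k1 * t)
h, t = 1e-4, 0.7
ypp = (y(t + h) - 2 * y(t) + y(t - h)) / (h * h)
yp = (y(t + h) - y(t - h)) / (2 * h)
assert abs(A * ypp + B * yp + C * y(t)) < 1e-4
```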

1.5.4. Eigenvalue Problems

Eigenvalue problems can be solved just like in the familiar linear algebra case. For instance, take some differential equation in this form:

$$A(f) = \lambda f$$

where A is a linear operator on function space, and λ is any constant. Traditionally, one would solve such an eigenvalue problem like so:

$$\det(A - \lambda I) = 0$$

In the simple example of a polynomial basis, the function f can be represented as a linear combination of linearly independent polynomials. A simple basis to choose is the Taylor (monomial) basis, i.e. e_n = x^n, where e_n is the nth basis vector. Note that many polynomial bases are orthogonal and span this subset of function space, but this is the simplest example. In this case, the matrix A represents an operation on an infinite polynomial, and the λI tells you to subtract λ from every diagonal entry. You can interpret this literally, using the following example.

1.5.4.1. Example
$$D(r^{2}D(f(r))) = \lambda f(r)$$

is such an example of an eigenvalue problem. Using the Taylor basis, we need to know two things: what D is as an infinite-dimensional matrix in this basis, and what multiplication by r² is as an infinite-dimensional matrix. f is the unknown vector we are trying to solve for in this system. Note this observation:

$$\begin{pmatrix}
0 & 1 & 0 & 0 & 0 \\
0 & 0 & 2 & 0 & 0 \\
0 & 0 & 0 & 3 & 0 \\
0 & 0 & 0 & 0 & 4 \\
0 & 0 & 0 & 0 & 0
\end{pmatrix}
\begin{pmatrix}a \\ b \\ c \\ d \\ e\end{pmatrix}
=
\begin{pmatrix}b \\ 2c \\ 3d \\ 4e \\ 0\end{pmatrix}$$

This matrix encodes the power rule in the Taylor basis, truncated to polynomials of degree four; the nth entry of each vector is the coefficient of the degree-n monomial, which means, for example:

$$\begin{pmatrix}a \\ b \\ c \\ d \\ e\end{pmatrix} := a + bx + cx^{2} + dx^{3} + ex^{4}$$

then the derivative of this vector would be:

$$b + 2cx + 3dx^{2} + 4ex^{3}$$

which is exactly the coefficients in the resulting vector! If we generalize this to infinitely many dimensions (an infinite vector and a matrix with infinitely many entries), the same effect holds.
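The truncated version of this construction is easy to play with in code. Here is a minimal pure-Python sketch (matrix size and helper names are my own choices) of the finite D matrix acting on a coefficient vector:

```python
# Finite derivative matrix in the Taylor basis: D[i][i+1] = i + 1
# encodes the power rule d/dx x^{i+1} = (i+1) x^i.
N = 5

def d_matrix(n):
    D = [[0] * n for _ in range(n)]
    for i in range(n - 1):
        D[i][i + 1] = i + 1
    return D

def apply(M, v):
    return [sum(M[i][j] * v[j] for j in range(len(v))) for i in range(len(M))]

# (a, b, c, d, e) stands for a + b x + c x^2 + d x^3 + e x^4.
coeffs = [7, 1, 2, 3, 4]          # 7 + x + 2x^2 + 3x^3 + 4x^4
deriv = apply(d_matrix(N), coeffs)
# derivative is 1 + 4x + 9x^2 + 16x^3 -> coefficients (1, 4, 9, 16, 0)
assert deriv == [1, 4, 9, 16, 0]
```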

Thus, the infinite matrix with the increasing off-diagonal entries is the D matrix, or D operator. But what is the r² operator? We know it must be a matrix that shifts every entry of the vector down by two positions (raising each monomial's degree by two) and fills the first two entries with zeros. Calling this matrix R, the matrix product DRD yields a new infinite matrix M, which we can use to solve the eigenvalue problem det(M - λI) = 0. The R matrix is:

$$\begin{pmatrix}
0 & 0 & 0 & 0 & 0 \\
0 & 0 & 0 & 0 & 0 \\
1 & 0 & 0 & 0 & 0 \\
0 & 1 & 0 & 0 & 0 \\
0 & 0 & 1 & 0 & 0
\end{pmatrix}$$

and so on. This matrix does to a polynomial's coefficient vector exactly what multiplying by r² does to the polynomial. We then multiply the two matrices, first forming RD:

$$\begin{pmatrix}
0 & 0 & 0 & 0 & 0 \\
0 & 0 & 0 & 0 & 0 \\
1 & 0 & 0 & 0 & 0 \\
0 & 1 & 0 & 0 & 0 \\
0 & 0 & 1 & 0 & 0
\end{pmatrix}
\begin{pmatrix}
0 & 1 & 0 & 0 & 0 \\
0 & 0 & 2 & 0 & 0 \\
0 & 0 & 0 & 3 & 0 \\
0 & 0 & 0 & 0 & 4 \\
0 & 0 & 0 & 0 & 0
\end{pmatrix}
=
\begin{pmatrix}
0 & 0 & 0 & 0 & 0 \\
0 & 0 & 0 & 0 & 0 \\
0 & 1 & 0 & 0 & 0 \\
0 & 0 & 2 & 0 & 0 \\
0 & 0 & 0 & 3 & 0
\end{pmatrix}$$

and so on, as you can see the pattern. Now we multiply in another D:

$$\begin{pmatrix}
0 & 1 & 0 & 0 & 0 \\
0 & 0 & 2 & 0 & 0 \\
0 & 0 & 0 & 3 & 0 \\
0 & 0 & 0 & 0 & 4 \\
0 & 0 & 0 & 0 & 0
\end{pmatrix}
\begin{pmatrix}
0 & 0 & 0 & 0 & 0 \\
0 & 0 & 0 & 0 & 0 \\
0 & 1 & 0 & 0 & 0 \\
0 & 0 & 2 & 0 & 0 \\
0 & 0 & 0 & 3 & 0
\end{pmatrix}
=
\begin{pmatrix}
0 & 0 & 0 & 0 & 0 \\
0 & 2 \cdot 1 & 0 & 0 & 0 \\
0 & 0 & 3 \cdot 2 & 0 & 0 \\
0 & 0 & 0 & 4 \cdot 3 & 0 \\
0 & 0 & 0 & 0 & 5 \cdot 4
\end{pmatrix}$$

Of course, the 5 · 4 entry isn't actually produced by the finite product above (the last row of D is zero there), but it does appear in the infinite version of this process. Now, we can finally subtract λ from the diagonal of this infinite matrix:

$$\begin{pmatrix}
-\lambda & 0 & 0 & 0 & 0 \\
0 & 2 \cdot 1 - \lambda & 0 & 0 & 0 \\
0 & 0 & 3 \cdot 2 - \lambda & 0 & 0 \\
0 & 0 & 0 & 4 \cdot 3 - \lambda & 0 \\
0 & 0 & 0 & 0 & 5 \cdot 4 - \lambda
\end{pmatrix}$$

Now, you might be wondering how we're going to take the determinant of this infinite matrix. We can take a limit of finite matrices to find out what the generalization might be. For instance, the 3 × 3 case looks like this:

$$\lambda^{2}(2 \cdot 1 - \lambda)$$

(the very last diagonal entry in the finite case does not extend infinitely; truncation makes it -λ rather than 3 · 2 - λ, which is where the λ² comes from). Now taking some higher dimensions:

$$\begin{aligned}
&\lambda^{2}(2 \cdot 1 - \lambda)(3 \cdot 2 - \lambda) \\
&\lambda^{2}(2 \cdot 1 - \lambda)(3 \cdot 2 - \lambda)(4 \cdot 3 - \lambda) \\
&\lambda^{2}(2 \cdot 1 - \lambda)(3 \cdot 2 - \lambda)(4 \cdot 3 - \lambda)(5 \cdot 4 - \lambda)
\end{aligned}$$

Using inductive reasoning, we should expect the infinite form to be:

$$\det(M - \lambda I) = -\lambda(2 \cdot 1 - \lambda)(3 \cdot 2 - \lambda)(4 \cdot 3 - \lambda)(5 \cdot 4 - \lambda)(6 \cdot 5 - \lambda)\cdots$$

(note that it isn't λ², because in the infinite case the very last -λ factor never appears; the lone leading -λ is also why the product is negative). Note that if we set this determinant to zero, then λ = n(n - 1) where n is a natural number (including zero). Then we substitute the λ for some n back in; let's use n = 2, i.e. λ = 2, as an example:

$$\begin{pmatrix}
-2 & 0 & 0 & 0 & 0 \\
0 & 0 & 0 & 0 & 0 \\
0 & 0 & 4 & 0 & 0 \\
0 & 0 & 0 & 10 & 0 \\
0 & 0 & 0 & 0 & 18
\end{pmatrix}
\begin{pmatrix}a_{1} \\ a_{2} \\ a_{3} \\ a_{4} \\ a_{5}\end{pmatrix}
=
\begin{pmatrix}0 \\ 0 \\ 0 \\ 0 \\ 0\end{pmatrix}$$

Clearly, when we choose n = 2, the second entry a₂ is free, and the rest must be zero for the given eigenfunction. Since a₂ is the coefficient of x¹, the eigenvector for λ = n(n - 1) is ax^{n-1} for any value a (check: D(r²D(ar)) = D(ar²) = 2ar, and 2 = n(n - 1) for n = 2). This is one of the solutions to this differential equation.
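We can verify this family of eigenfunctions directly with finite differences (the sample point, step size, and helper names below are my own choices): for f(r) = r^m, applying D(r²D(·)) should return m(m + 1)f, which matches λ = n(n - 1) with n = m + 1:

```python
# Check that f(r) = r^m satisfies D(r^2 D(f)) = m(m+1) f,
# i.e. eigenvalue lambda = n(n-1) with n = m + 1.
def lhs(m, r, h=1e-5):
    f = lambda t: t ** m
    g = lambda t: t * t * (f(t + h) - f(t - h)) / (2 * h)  # r^2 f'(r)
    return (g(r + h) - g(r - h)) / (2 * h)                 # D(r^2 f')

r = 1.3
errors = [abs(lhs(m, r) - m * (m + 1) * r ** m) for m in range(4)]
assert max(errors) < 1e-3
```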

It turns out there's another solution in a space that the Taylor basis does not span, but I'll leave finding it with this method as an exercise (extend the basis to include other kinds of functions). Note that one can also take a Fourier basis as the eigenbasis, in the spirit of the Fourier Transform; the method generalizes easily.
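As a closing sanity check, the finite-matrix construction from this example can be reproduced in a few lines of pure Python (sizes and helper names are mine); the diagonal of M = DRD comes out as k(k - 1), with the last entry forced to zero by truncation:

```python
# Build finite D and R matrices and check M = D R D is diagonal
# with entries k(k-1), except the truncated last entry.
N = 6

def d_matrix(n):
    D = [[0] * n for _ in range(n)]
    for i in range(n - 1):
        D[i][i + 1] = i + 1          # power rule in the Taylor basis
    return D

def r2_matrix(n):
    R = [[0] * n for _ in range(n)]
    for i in range(2, n):
        R[i][i - 2] = 1              # multiply by r^2: shift down two slots
    return R

def matmul(A, B):
    n = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

M = matmul(d_matrix(N), matmul(r2_matrix(N), d_matrix(N)))
diagonal = [M[i][i] for i in range(N)]
assert diagonal == [0, 2, 6, 12, 20, 0]   # k(k-1), last entry truncated
```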